Learning Object Context for Dense Captioning
Authors
Abstract
Related resources
Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning
Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predomina...
Contrastive Learning for Image Captioning
Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learn...
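The contrastive idea sketched in this abstract can be illustrated with a toy objective. This is a minimal sketch under assumed names (not the paper's actual formulation): the model is rewarded when a caption scores higher under its own image than under a mismatched distractor image, which pushes captions to be distinctive.

```python
import math

def log_prob(caption_score: float) -> float:
    # Stand-in for log p(caption | image) from a real captioning model:
    # here just a numerically stable log-sigmoid of a scalar score.
    return -math.log1p(math.exp(-caption_score))

def contrastive_loss(score_matched: float, score_mismatched: float) -> float:
    # Encourage log p(c | I) for the true image I to exceed
    # log p(c | I') for a distractor image I'.
    return -(log_prob(score_matched) - log_prob(score_mismatched))

# A caption that fits its own image better than a distractor gives a
# lower loss than the reversed (wrong) assignment.
good = contrastive_loss(score_matched=2.0, score_mismatched=-1.0)
bad = contrastive_loss(score_matched=-1.0, score_mismatched=2.0)
print(good < bad)
```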
Stack-Captioning: Coarse-to-Fine Learning for Image Captioning
The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multistage prediction framework for image captioning, composed of multiple decoders each of which...
Learning visual context for object detection
Context plays an important role in general scene perception, as it provides additional information about the possible locations of objects in images. The object detectors used in computer vision typically do not exploit this kind of information. In this article we therefore present a concept for learning contextual information from example scene images. We use this information to compute...
Learning to Guide Decoding for Image Captioning
Recently, much advance has been made in image captioning, and an encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at ...
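The guiding-network idea described above can be sketched in a few lines. This is an illustrative reconstruction with assumed names and dimensions (a single linear layer as the "guiding network"), not the paper's architecture: a vector predicted from image features is concatenated to the word embedding at every decoder step.

```python
import numpy as np

rng = np.random.default_rng(0)

def guiding_vector(image_feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    # Toy guiding network: pool region features, apply one linear
    # layer plus tanh to summarize image attributes.
    return np.tanh(W @ image_feats.mean(axis=0))

image_feats = rng.standard_normal((7, 16))   # 7 regions, 16-dim features
W = rng.standard_normal((8, 16)) * 0.1       # guiding-network weights
g = guiding_vector(image_feats, W)           # 8-dim guiding vector

word_emb = rng.standard_normal(32)           # embedding of current word
# The decoder input at each step is the word embedding augmented
# with the (fixed) guiding vector.
decoder_input = np.concatenate([word_emb, g])
print(decoder_input.shape)
```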
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2019
ISSN: 2374-3468,2159-5399
DOI: 10.1609/aaai.v33i01.33018650